Current Virtual Reality (VR) environments lack the rich haptic signals that humans experience during real-life interactions, such as the sensation of texture during lateral movement on a surface. Adding realistic haptic textures to VR environments requires a model that generalizes to variations of a user's interaction and to the wide variety of existing textures in the world. Existing methods for haptic texture rendering typically develop one model per texture, resulting in low scalability. We present a deep learning-based action-conditional model for haptic texture rendering and evaluate its perceptual performance in rendering realistic texture vibrations through a multi-part human user study. This model is unified over all materials and uses data from a vision-based tactile sensor (GelSight) to render the appropriate surface conditioned on the user's action in real time. For rendering texture, we use a high-bandwidth vibrotactile transducer attached to a 3D Systems Touch device. Our user study shows that our learning-based method creates high-frequency texture renderings of comparable or better quality than state-of-the-art methods without the need to learn a separate model per texture. Furthermore, we show that the method can render previously unseen textures using a single GelSight image of their surface.
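A minimal sketch of one plausible action-conditional architecture is given below. It assumes (these details are not from the abstract) 3x128x128 GelSight images, a 2-D action vector (scan speed, normal force), and prediction of a short window of vibration samples; the class and parameter names are hypothetical.

```python
# Sketch (PyTorch): a CNN encodes the GelSight image into a texture embedding,
# and an MLP decoder conditioned on the user's action predicts vibration samples.
import torch
import torch.nn as nn

class TextureVibrationModel(nn.Module):
    def __init__(self, action_dim=2, out_samples=100):
        super().__init__()
        # CNN encoder: GelSight image -> texture embedding
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP decoder conditioned on the texture embedding and the user action
        self.decoder = nn.Sequential(
            nn.Linear(64 + action_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, out_samples),
        )

    def forward(self, gelsight_image, action):
        z = self.encoder(gelsight_image)        # (B, 64) texture embedding
        x = torch.cat([z, action], dim=-1)      # condition on speed / force
        return self.decoder(x)                  # (B, out_samples) vibration window

# Example: one image of an unseen texture plus one action -> one predicted window.
model = TextureVibrationModel()
vib = model(torch.randn(1, 3, 128, 128), torch.tensor([[0.05, 1.2]]))
```

Because the texture is represented by an image embedding rather than a per-texture model, a single network of this shape can in principle be reused for unseen surfaces, which is the scalability argument made above.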
Recent advances in upper limb prostheses have led to significant improvements in the number of movements provided by the robotic limb. However, the method for controlling multiple degrees of freedom via user-generated signals remains challenging. To address this issue, various machine learning controllers have been developed to better predict movement intent. As these controllers become more intelligent and take on more autonomy in the system, the traditional approach of representing the human-machine interface as a human controlling a tool becomes limiting. One possible approach to improve the understanding of these interfaces is to model them as collaborative, multi-agent systems through the lens of joint action. The field of joint action has commonly been applied to two human partners who work together to achieve a task, such as singing or moving a table together, by effecting coordinated change in their shared environment. In this work, we compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action. The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other, along with recommendations for how to improve these systems by increasing the collaborative communication between each partner.
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but the quality bar for medical and clinical applications is high. Today, attempts to assess models' clinical knowledge typically rely on automated evaluations on limited benchmarks. There is no standard to evaluate model predictions and reasoning across a breadth of tasks. To address this, we present MultiMedQA, a benchmark combining six existing open question answering datasets spanning professional medical exams, research, and consumer queries; and HealthSearchQA, a new free-response dataset of medical questions searched online. We propose a framework for human evaluation of model answers along multiple axes including factuality, precision, possible harm, and bias. In addition, we evaluate PaLM (a 540-billion parameter LLM) and its instruction-tuned variant, Flan-PaLM, on MultiMedQA. Using a combination of prompting strategies, Flan-PaLM achieves state-of-the-art accuracy on every MultiMedQA multiple-choice dataset (MedQA, MedMCQA, PubMedQA, MMLU clinical topics), including 67.6% accuracy on MedQA (US Medical License Exam questions), surpassing prior state-of-the-art by over 17%. However, human evaluation reveals key gaps in Flan-PaLM responses. To resolve this, we introduce instruction prompt tuning, a parameter-efficient approach for aligning LLMs to new domains using a few exemplars. The resulting model, Med-PaLM, performs encouragingly, but remains inferior to clinicians. We show that comprehension, recall of knowledge, and medical reasoning improve with model scale and instruction prompt tuning, suggesting the potential utility of LLMs in medicine. Our human evaluations reveal important limitations of today's models, reinforcing the importance of both evaluation frameworks and method development in creating safe, helpful LLMs for clinical applications.
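The parameter-efficient idea behind instruction prompt tuning is to learn a small set of soft prompt vectors while keeping the LLM frozen. The sketch below illustrates generic soft prompt tuning on that principle, not Med-PaLM's actual training setup: GPT-2 stands in for the LLM and the exemplar is invented.

```python
# Sketch: train only a learnable soft prompt prepended to a frozen causal LM.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
for p in model.parameters():               # freeze all LM weights
    p.requires_grad = False

n_prompt = 20                              # number of learnable prompt vectors
embed = model.get_input_embeddings()
soft_prompt = nn.Parameter(torch.randn(n_prompt, embed.embedding_dim) * 0.02)
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

def step(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    tok_embeds = embed(ids)                                        # (1, T, D)
    inputs = torch.cat([soft_prompt.unsqueeze(0), tok_embeds], dim=1)
    # Ignore the soft-prompt positions in the loss (-100 = ignored label).
    labels = torch.cat([torch.full((1, n_prompt), -100), ids], dim=1)
    loss = model(inputs_embeds=inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# One gradient step on a toy exemplar (hypothetical question/answer pair).
print(step("Q: What is a common symptom of anemia? A: Fatigue."))
```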
Molecular and genomic properties are critical in selecting cancer treatments to target individual tumors, particularly for immunotherapy. However, the methods to assess such properties are expensive, time-consuming, and often not routinely performed. Applying machine learning to H&E images can provide a more cost-effective screening method. Dozens of studies over the last few years have demonstrated that a variety of molecular biomarkers can be predicted from H&E alone using advances in deep learning: molecular alterations, genomic subtypes, protein biomarkers, and even the presence of viruses. This article reviews the diverse applications across cancer types and the methodology to train and validate these models on whole slide images. From bottom-up to pathologist-driven to hybrid approaches, the leading trends include a variety of weakly supervised deep learning-based approaches, as well as mechanisms for training strongly supervised models in select situations. While results of these algorithms look promising, several challenges persist, including small training sets, rigorous validation, and model explainability. Biomarker prediction models may yield a screening method to determine when to run molecular tests or an alternative when molecular tests are not possible. They also create new opportunities in quantifying intratumoral heterogeneity and predicting patient outcomes.
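A common instance of the weakly supervised approaches mentioned above is attention-based multiple-instance learning, where only the slide-level label supervises a model that pools tile-level features. The sketch below is illustrative, not any specific study's pipeline; it assumes tiles have already been embedded into 512-D feature vectors.

```python
# Sketch (PyTorch): attention-based MIL for slide-level biomarker prediction.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, tile_feats):                        # (n_tiles, feat_dim)
        w = torch.softmax(self.attn(tile_feats), dim=0)   # tile attention weights
        slide_feat = (w * tile_feats).sum(dim=0)          # attention-pooled slide feature
        return self.classifier(slide_feat)                # slide-level biomarker logit

# Example: 1,000 tiles from one slide -> one biomarker probability.
logit = AttentionMIL()(torch.randn(1000, 512))
prob = torch.sigmoid(logit)
```

The attention weights also offer a degree of explainability, since high-weight tiles indicate which regions of the slide drove the prediction.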
Photoactive iridium complexes are of broad interest because of their applications ranging from lighting to photocatalysis. However, predicting the excited state properties of these complexes challenges ab initio methods such as time-dependent density functional theory (TD-DFT) from the perspectives of both accuracy and computational cost, complicating high-throughput virtual screening (HTVS). We instead leverage low-cost machine learning (ML) models to predict the excited state properties of photoactive iridium complexes. We use experimental data on 1,380 iridium complexes to train and evaluate the ML models, and identify the best and most transferable models to be those trained on electronic structure features from low-cost density functional tight binding calculations. Using these models, we predict the three excited state properties considered, namely the mean emission energy of phosphorescence, the excited state lifetime, and the emission spectral integral, with accuracy comparable to or surpassing TD-DFT. We perform feature importance analysis to determine which iridium complex attributes govern the excited state properties and validate these trends with explicit examples. To demonstrate how the ML models can be used for HTVS and the acceleration of chemical discovery, we curate a set of novel hypothetical iridium complexes and identify promising ligands for the design of new phosphors.
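The surrogate described above is, in spirit, a regression from cheap electronic-structure features to an excited state property. The sketch below shows that pattern with synthetic placeholder data and a random forest; the features, values, and model choice are assumptions for illustration, not the paper's.

```python
# Sketch: regression from DFTB-style features to mean emission energy,
# plus a feature importance readout, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1380, 32))                               # stand-in electronic-structure features
y = 2.3 + 0.1 * X[:, 0] + rng.normal(scale=0.05, size=1380)   # stand-in emission energy (eV)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test MAE (eV):", mean_absolute_error(y_test, model.predict(X_test)))

# Feature importance analysis: which features most influence the prediction.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("most important feature indices:", top)
```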
Two prominent challenges for machine learning (ML)-accelerated chemical discovery are the synthesizability of candidate molecules or materials and the fidelity of the data used in ML model training. To address the first challenge, we construct a hypothetical design space of 32.5 million transition metal complexes (TMCs) in which all of the constituent fragments (i.e., metals and ligands) and ligand symmetries are synthetically accessible. To address the second challenge, we search for consensus in predictions among 23 density functional approximations across multiple rungs of Jacob's ladder. To accelerate the screening of these 32.5 million TMCs, we use efficient global optimization to sample candidate low-spin chromophores that simultaneously have low absorption energies and low static correlation. Despite the scarcity of potential chromophores in this large chemical space (i.e., <0.01%), we identify transition metal chromophores with high likelihood (i.e., >10%) as the ML models improve during active learning. This represents a 1,000-fold acceleration in discovery, corresponding to discovery in days instead of years. Analysis of the candidate chromophores reveals a preference for Co(III) and large, strong-field ligands with greater bond saturation. We compute the absorption spectra of promising chromophores on the Pareto front with time-dependent density functional theory calculations and verify that two-thirds of them have the desired excited state properties. Although these complexes have never been explored experimentally, their constituent ligands exhibit interesting optical properties in the literature, demonstrating the effectiveness of our construction of a realistic TMC design space and of the active learning approach.
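The sketch below illustrates the general shape of pool-based efficient global optimization (active learning with an expected-improvement acquisition). It optimizes a single toy objective over a synthetic candidate pool, whereas the work above samples chromophores against two objectives (absorption energy and static correlation); the data and the "oracle" are placeholders for a DFT evaluation.

```python
# Sketch: Gaussian-process surrogate + expected improvement over a fixed pool.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from scipy.stats import norm

rng = np.random.default_rng(1)
pool = rng.uniform(-2, 2, size=(5000, 8))                  # candidate descriptors
def oracle(x):                                             # stand-in for a costly DFT evaluation
    return np.sum(x**2, axis=-1)

labelled = list(rng.choice(len(pool), 10, replace=False))  # initial random batch
for _ in range(20):
    X, y = pool[labelled], oracle(pool[labelled])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(pool, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement (minimization)
    ei[labelled] = -np.inf                                  # do not re-select labelled points
    labelled.append(int(np.argmax(ei)))                     # acquire the next candidate

print("best objective found:", oracle(pool[labelled]).min())
```

The acceleration reported above comes from this kind of loop: the surrogate concentrates the expensive evaluations on the small fraction of the pool most likely to contain viable chromophores.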
We present a robotic assembly system that streamlines the design-to-make workflow from CAD models of product assemblies to fully programmed and adaptive assembly processes. Our system captures (in the CAD tool) the intent of the assembly process for a specific robotic workcell and generates a recipe of task-level instructions. By combining visual sensing with deep-learned perception models, the robots infer the necessary actions to assemble the design from the generated recipe. The perception models are trained directly from simulation, allowing the system to identify individual parts based on CAD information. We demonstrate the system with a two-robot workcell assembling an interlocking 3D part design. We first build and tune the assembly process in simulation and verify the generated recipe. Finally, the real robotic workcell assembles the design using the same behaviors.
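As a rough illustration of what a "recipe of task-level instructions" could look like as a data structure, consider the sketch below; the step names, fields, and part identifiers are hypothetical, not the system's actual schema.

```python
# Sketch: a task-level assembly recipe as plain dataclasses.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AssemblyStep:
    robot: str           # which robot in the workcell executes the step
    action: str          # task-level action, e.g. "pick", "insert", "hold"
    part: str            # part identifier taken from the CAD model
    target_frame: str    # named pose/frame the action is defined against

@dataclass
class AssemblyRecipe:
    name: str
    steps: List[AssemblyStep] = field(default_factory=list)

recipe = AssemblyRecipe(
    name="interlocking_blocks",
    steps=[
        AssemblyStep("robot_a", "pick", "block_1", "fixture_slot_1"),
        AssemblyStep("robot_b", "hold", "block_1", "assembly_frame"),
        AssemblyStep("robot_a", "insert", "block_2", "block_1_socket"),
    ],
)
for step in recipe.steps:
    print(step.robot, step.action, step.part, "->", step.target_frame)
```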
Approximate density functional theory (DFT) has become indispensable owing to its cost-accuracy trade-off relative to more demanding but accurate correlated wavefunction theories. To date, however, no single density functional approximation (DFA) with universal accuracy has been identified, leading to uncertainty in the quality of data generated with DFT. Using electron density fitting and transfer learning, we build a DFA recommender that selects a DFA in a system-specific manner with respect to gold-standard but cost-prohibitive coupled cluster theory. We demonstrate this recommender approach on the evaluation of vertical spin-splitting energies for challenging transition metal complexes. Our recommender predicts top-performing DFAs and yields excellent accuracy (about 2 kcal/mol) for chemical discovery, outperforming both individual transfer learning models and the single best functional in a set of 48 DFAs. We demonstrate the transferability of the DFA recommender to experimentally synthesized compounds with distinct chemistry.
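The recommender idea reduces to predicting, per DFA, the expected deviation from the coupled-cluster reference for a given system and then choosing the DFA with the smallest predicted error. The sketch below shows that selection logic with synthetic placeholder features, a small illustrative DFA list, and an off-the-shelf regressor; none of these choices are the paper's.

```python
# Sketch: per-DFA error models + argmin selection for one new system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
dfas = ["PBE", "B3LYP", "M06", "SCAN"]                      # illustrative subset of 48
X_train = rng.normal(size=(500, 16))                        # system-specific (density) features
errors_train = np.abs(rng.normal(size=(500, len(dfas))))    # |DFA - CC| per DFA (kcal/mol)

# One error model per DFA.
models = [GradientBoostingRegressor().fit(X_train, errors_train[:, j])
          for j in range(len(dfas))]

def recommend(x):
    """Return the DFA with the lowest predicted error for one new system."""
    predicted = [m.predict(x.reshape(1, -1))[0] for m in models]
    return dfas[int(np.argmin(predicted))], min(predicted)

print(recommend(rng.normal(size=16)))
```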
Antimicrobial resistance (AMR) is a growing public health threat, estimated to cause over 10 million deaths per year and to cost the global economy 100 trillion USD by 2050 under status quo projections. These losses are mainly due to increased morbidity and mortality from treatment failure, AMR infections during medical procedures, and loss of quality of life attributed to AMR. Numerous interventions have been proposed to control the development of AMR and mitigate the risks posed by its spread. This paper reviews key aspects of bacterial AMR management and control that can leverage data technologies such as artificial intelligence, machine learning, and mathematical and statistical modelling, fields that have developed rapidly this century. Although data technologies have become integral to biomedical research, their impact on AMR management has remained modest. We outline the use of data technologies to combat AMR, detailing recent advances in four complementary categories: surveillance, prevention, diagnosis, and treatment. We provide an overview of current AMR control approaches that use data technologies within biomedical research, clinical practice, and the "One Health" context. We discuss the potential impact of data technologies and the challenges they face in implementation in high- and middle-income countries, and recommend concrete actions needed to allow these technologies to be integrated more readily into the healthcare and public health sectors.
Analysis of single-cell transcriptomics often relies on clustering cells and then performing differential gene expression (DGE) to identify genes that vary between these clusters. These discrete analyses successfully determine cell types and markers; however, continuous variation within and between cell types may not be detected. We propose three topologically motivated mathematical methods for unsupervised feature selection that simultaneously consider discrete and continuous transcriptional patterns across multiple scales. Eigenscores ($\mathrm{eig}_i$) rank genes based on their correspondence to low-frequency intrinsic patterning in the data, using the spectral decomposition of the graph Laplacian. The multiscale Laplacian score (MLS) is an unsupervised method for locating relevant scales in the data and selecting genes that are coherently expressed at these respective scales. The persistent Rayleigh quotient (PRQ) takes data equipped with a filtration, allowing genes with different roles in a bifurcating process (e.g., pseudotime) to be separated. We demonstrate the utility of these techniques by applying them to published single-cell transcriptomics data sets. The methods validate previously identified genes and detect additional genes with coherent expression patterns. By studying the interplay between gene signals and the geometry of the underlying space, the three methods give multidimensional rankings of the genes and visualisations of the relationships between them.
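The eigenscore idea can be made concrete with a small sketch: build a cell-cell kNN graph, take the low-frequency eigenvectors of its graph Laplacian, and score each gene by how much of its expression signal lies in that low-frequency subspace. This is an illustrative reading of the idea rather than the paper's exact definition, and the data below are synthetic.

```python
# Sketch: eigenscore-style gene ranking from the graph Laplacian spectrum.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian

rng = np.random.default_rng(3)
n_cells, n_genes = 500, 200
coords = rng.normal(size=(n_cells, 10))                            # stand-in cell embedding
expr = rng.poisson(1.0, size=(n_cells, n_genes)).astype(float)     # stand-in expression matrix

# Symmetric kNN graph over cells and its normalized Laplacian.
A = kneighbors_graph(coords, n_neighbors=15, mode="connectivity")
A = 0.5 * (A + A.T)
L = laplacian(A, normed=True).toarray()

# k lowest-frequency eigenvectors (smallest Laplacian eigenvalues).
k = 20
_, evecs = np.linalg.eigh(L)
U = evecs[:, :k]

# Score each gene by the fraction of its (centred, normalized) signal energy
# captured by the low-frequency eigenvectors.
X = expr - expr.mean(axis=0)
X = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
scores = np.linalg.norm(U.T @ X, axis=0) ** 2
print("top genes by eigenscore:", np.argsort(scores)[::-1][:10])
```

Genes scoring highly under such a measure vary smoothly over the cell graph, which is why this style of ranking can pick up continuous transcriptional patterns that cluster-then-DGE pipelines miss.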